Seed Coder 8B Instruct GGUF
License: MIT
This is a self-quantized build: the output and token-embedding tensors are kept in f16, while the remaining tensors are quantized to q5_k or q6_k. The result is a noticeably smaller file with performance close to the pure f16 model.
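A mixed quantization like this can be reproduced with llama.cpp's `llama-quantize` tool, which accepts per-tensor-type overrides. The sketch below is an assumption about how this build was produced, not the author's exact command; the input and output filenames are placeholders.

```shell
# Quantize an f16 GGUF to Q5_K_M while keeping the output and
# token-embedding tensors in f16 (filenames are hypothetical).
llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  seed-coder-8b-instruct-f16.gguf \
  seed-coder-8b-instruct-q5_k_m.gguf \
  Q5_K_M
```

The `Q5_K_M` preset itself mixes q5_k and q6_k across layers, which matches the tensor types described above.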
Tags: Large Language Model, English